Today, Beyoncé is one of my favorite artists. She has already had a long career and has produced a number of albums, including Dangerously In Love, B'Day, I Am... Sasha Fierce, 4, Beyoncé, Lemonade, and The Lion King. One of the reasons I like Beyoncé and her music so much is that her style varies so much between albums, which means I can always rely on her music. In extravagant moments I like to listen to 4; if I want to hear old pop classics like Halo, I put on I Am... Sasha Fierce; and when I'm not feeling well, I listen to Lemonade for a boost of self-confidence. Because I enjoy Beyoncé's stylistic variation between albums so much, I think it would be interesting to research with SpotifyR how her music has developed in recent years and how this is reflected in the characteristics of her albums, using SpotifyR audio features like acousticness, danceability, energy, instrumentalness, key, liveness, loudness, mode, speechiness, tempo, and valence. Furthermore, I'd like to research whether demographics have changed between the release of Dangerously In Love and today. To dive a bit deeper into her work, I would like to compare earlier songs, for example Halo, with more recent songs, like Drunk In Love.
In the scatterplot below, the energy, valence, acousticness, and danceability of Beyoncé's songs are shown. Overall, Beyoncé's songs have an acousticness below 0.5, which means low confidence that the tracks are acoustic. You can also clearly see the difference in valence between the albums Dangerously In Love, B'Day Deluxe Edition, and 4 and the other albums: the former three have a higher valence.
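A scatterplot like this could be built with ggplot2 along the following lines. This is only a sketch: the `beyonce` data frame here is a small mock with invented values, standing in for the output of spotifyr's `get_artist_audio_features()`, whose column names it borrows.

```r
library(ggplot2)

# Mock stand-in for spotifyr::get_artist_audio_features("beyonce");
# all values below are invented for illustration only.
beyonce <- data.frame(
  album_name   = c("Dangerously In Love", "4", "Lemonade"),
  energy       = c(0.62, 0.71, 0.55),
  valence      = c(0.80, 0.75, 0.40),
  acousticness = c(0.20, 0.10, 0.35),
  danceability = c(0.70, 0.65, 0.60)
)

# Valence vs. energy, with danceability mapped to point size and
# acousticness to colour, faceted per album.
p <- ggplot(beyonce, aes(x = valence, y = energy,
                         size = danceability, colour = acousticness)) +
  geom_point(alpha = 0.7) +
  facet_wrap(~ album_name) +
  labs(title = "Audio features per album")

p
```

Faceting per album makes the between-album valence differences described above easier to see than a single panel with album mapped to colour.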
I wanted to research speechiness, i.e. the presence of spoken words, in Beyoncé's songs. According to Spotify, values between 0.33 and 0.66 describe tracks that contain both music and speech, values above 0.66 describe tracks that are probably made entirely of spoken words, and tracks with values below 0.33 most likely contain music and other non-speech-like sounds. Based on these boundaries, I use the difference between each speechiness score and 0.33, so the emphasis lies on rap songs versus more instrumental songs: if the difference is above 0, the track probably contains speech as well as music, and the further it falls below 0, the fewer spoken words the song contains. Now that I've run the code, I see that Beyoncé's speechiness values are primarily below 0.66. That's why I'm thinking of changing the code so that values below 0.66 are emphasized more, instead of values above 0.66.
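The transformation described above can be sketched as follows. The `speechiness` values here are invented for illustration; in practice they would come from the spotifyr audio features.

```r
# Centre speechiness on Spotify's 0.33 boundary: positive values mean the
# track likely mixes speech with music (or, above 0.66, is mostly speech),
# negative values mean mostly music and non-speech sounds.
speechiness <- c(0.05, 0.12, 0.40, 0.70)  # invented example values
speech_diff <- speechiness - 0.33

speech_diff
```

Only the last two example tracks end up above 0, matching Spotify's "speech and music" (0.33 to 0.66) and "mostly speech" (above 0.66) ranges.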
I created this graph because I thought it would tell a lot about the albums. However, I'm not so sure it's useful... The only thing I can conclude from this information is that Beyoncé varies a lot in the keys she uses across albums.
## # A tibble: 10 x 5
## # Groups: key [9]
## playlist_name key n total percent
## <chr> <int> <int> <int> <dbl>
## 1 Dangerously In Love 2 1 9 11
## 2 Dangerously In Love 6 3 16 19
## 3 Dangerously In Love 1 3 22 14
## 4 Dangerously In Love 0 3 16 19
## 5 Dangerously In Love 9 1 6 17
## 6 Dangerously In Love 7 1 16 6
## 7 Dangerously In Love 10 2 10 20
## 8 Dangerously In Love 11 1 8 12
## 9 B'Day Deluxe Edition 8 2 7 29
## 10 B'Day Deluxe Edition 7 7 16 44
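A table like the one above could be produced with a grouped count along these lines. This is a sketch: the `beyonce` data frame with `playlist_name` and `key` columns is a small invented stand-in for the spotifyr playlist audio features, and `total`/`percent` are computed within each key group, matching the `Groups: key` header in the output.

```r
library(dplyr)

# Mock stand-in for spotifyr playlist audio features; values are invented.
beyonce <- data.frame(
  playlist_name = rep(c("Dangerously In Love", "B'Day Deluxe Edition"),
                      each = 4),
  key           = c(2, 7, 7, 0, 8, 7, 7, 7)
)

key_table <- beyonce %>%
  count(key, playlist_name, name = "n") %>%   # tracks per key per album
  group_by(key) %>%
  mutate(total   = sum(n),                    # tracks in that key, all albums
         percent = round(n / total * 100))    # album's share of that key

key_table
```

Grouping by `key` before computing `percent` means each percentage reads as "of all tracks in this key, how many come from this album", which is what the table above appears to show.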